Results 1 - 14 of 14
1.
Int J Neural Syst; 34(1): 2450006, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38063378

ABSTRACT

The stable decoding of movement parameters from neural activity is crucial for the success of brain-machine interfaces (BMIs). However, neural activity can be unstable over time, causing the parameters used for decoding movement to drift and hindering accurate decoding. One approach to this problem is to project neural activity onto a stable, low-dimensional manifold using dimensionality reduction techniques and to align the manifolds across sessions by maximizing their correlations. In practice, however, manifold stabilization techniques require knowledge of the true subject intentions, such as target direction or behavioral state. To overcome this limitation, an automatic unsupervised algorithm is proposed that determines movement target intention before manifold alignment, even in the presence of manifold rotation and scaling across sessions. This unsupervised algorithm is combined with a dimensionality reduction and alignment method to overcome decoder instabilities. The effectiveness of the BMI stabilizer is demonstrated by decoding the two-dimensional (2D) hand velocity of two rhesus macaque monkeys during a center-out reaching task. Evaluated with correlation coefficient and R-squared measures, the proposed method achieves higher decoding performance than a state-of-the-art unsupervised BMI stabilizer. The results support the automatic determination of movement intent in long-term BMI decoding, offering a promising route to stable and accurate movement decoding in BMI applications.
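The cross-session alignment step described above can be sketched with an orthogonal Procrustes solution, which recovers a rotation and uniform scale between two manifolds. This is an illustrative reconstruction under stated assumptions, not the paper's algorithm: the function names and toy data are invented, and the unsupervised intent-matching step is omitted.

```python
import numpy as np

def align_manifold(day_k, day_0):
    """Align a later session's low-dimensional neural manifold (day_k)
    to a reference session (day_0) under rotation and uniform scaling,
    via the orthogonal Procrustes solution. Both arrays: (samples, dims)."""
    # Center both point clouds
    A = day_k - day_k.mean(axis=0)
    B = day_0 - day_0.mean(axis=0)
    # SVD of the cross-covariance gives the optimal rotation
    U, s, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    # Optimal uniform scale (least-squares)
    scale = s.sum() / (A ** 2).sum()
    return scale * A @ R

# Toy check: a rotated-and-scaled manifold maps back onto the original
rng = np.random.default_rng(0)
ref = rng.normal(size=(100, 3))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
shifted = 2.5 * ref @ rot
aligned = align_manifold(shifted, ref)
print(np.allclose(aligned, ref - ref.mean(axis=0), atol=1e-8))
```

In the paper's setting the correspondence between sessions is unknown, which is exactly what the proposed unsupervised intent-determination step supplies before an alignment like this can be applied.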


Subjects
Brain-Computer Interfaces; Motor Cortex; Animals; Macaca mulatta; Movement; Algorithms; Hand
2.
Bioinformatics; 38(2): 469-475, 2022 01 03.
Article in English | MEDLINE | ID: mdl-34979024

ABSTRACT

MOTIVATION: The aim of quantitative structure-activity relationship (QSAR) studies is to identify novel drug-like molecules that can be suggested as lead compounds, by means of two approaches discussed in this article: first, identifying appropriate molecular descriptors with a feature-selection algorithm; and second, predicting the biological activities of designed compounds. Recent studies have shown increased interest in predicting huge numbers of molecules, known as Big Data, using deep learning models. However, despite these efforts, critical challenges remain: over-fitting and massive processing requirements are major shortcomings of deep learning models. Hence, finding the most effective molecular descriptors in the shortest possible time is an ongoing task. One successful method for speeding up the extraction of the best features from big datasets is the least absolute shrinkage and selection operator (LASSO), a regression model that selects a subset of molecular descriptors to enhance prediction accuracy and interpretability by removing inappropriate and irrelevant features. RESULTS: To implement and test the proposed model, a random forest was built to predict the molecular activities of Kaggle competition compounds. Finally, the prediction results and computation time of the suggested model were compared with other well-known algorithms, i.e. Boruta-random forest, deep random forest and the deep belief network model. The results revealed that improving output correlation through LASSO-random forest appreciably reduces implementation time and model complexity while maintaining prediction accuracy. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
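The two-stage LASSO-then-random-forest pipeline can be sketched with scikit-learn, assuming it is available. The synthetic descriptor matrix, coefficient values, and `alpha` below are illustrative choices, not the paper's data or settings:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-in for a descriptor matrix: 200 molecules x 50 descriptors,
# with activity driven by only the first 5 descriptors.
X = rng.normal(size=(200, 50))
y = X[:, :5] @ np.array([3.0, -2.0, 1.5, 1.0, -1.0]) + 0.1 * rng.normal(size=200)

# Step 1: LASSO zeroes out coefficients of irrelevant descriptors.
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("descriptors kept:", selected)

# Step 2: train the random forest only on the surviving descriptors.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:, selected], y)
print("R^2 on training data:", round(rf.score(X[:, selected], y), 3))
```

The appeal of the combination is that the L1 penalty performs the expensive feature screening in a single convex fit, so the forest only ever sees a small descriptor subset.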


Subjects
Big Data; Quantitative Structure-Activity Relationship; Algorithms; Data Analysis
3.
IEEE Trans Med Imaging; 40(3): 865-878, 2021 03.
Article in English | MEDLINE | ID: mdl-33232227

ABSTRACT

This paper proposes a mixed low-rank approximation and second-order tensor-based total variation (LRSOTTV) approach for the super-resolution and denoising of retinal optical coherence tomography (OCT) images through effective utilization of nonlocal spatial correlations and local smoothness properties. OCT imaging relies on interferometry, which explains why OCT images suffer from a high level of noise. In addition, data subsampling is conducted during OCT A-scan and B-scan acquisition. Therefore, effective super-resolution algorithms are necessary for reconstructing high-resolution, clean OCT images. In this paper, a low-rank regularization approach is proposed for exploiting a nonlocal self-similarity prior in OCT image reconstruction. To exploit the redundancy of multi-slice OCT data, a third-order tensor is constructed by extracting nonlocal similar three-dimensional blocks and grouping them with the k-nearest-neighbor method. Next, the nuclear norm is used as a regularization term to shrink the singular values of the constructed tensor along the nonlocal correlation direction. Further, regularization with first-order tensor-based total variation (FOTTV) and SOTTV is proposed for better preservation of retinal layers and suppression of artifacts in OCT images. The alternating direction method of multipliers (ADMM) is then used to solve the resulting optimization problem. Our experiments show that integrating SOTTV instead of FOTTV into a low-rank approximation model achieves noticeably improved results. Experimental results on the denoising and super-resolution of OCT images demonstrate that the proposed model provides images of higher numerical and visual quality than those obtained with state-of-the-art methods.
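The nuclear-norm shrinkage at the heart of such ADMM schemes is singular-value thresholding, the proximal operator of the nuclear norm. A minimal sketch on a plain matrix (the paper applies the same step to an unfolded third-order tensor of grouped blocks):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: soft-threshold the singular values of M
    by tau. This is the nuclear-norm proximal step used inside low-rank
    ADMM iterations like the one described above."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Toy check: a rank-2 matrix plus small noise; thresholding removes the
# weak noise directions and keeps the two dominant ones.
rng = np.random.default_rng(1)
low_rank = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
noisy = low_rank + 0.05 * rng.normal(size=(30, 30))
denoised = svt(noisy, tau=1.0)
print("rank after shrinkage:", np.linalg.matrix_rank(denoised, tol=1e-6))
```

Because the noise singular values here stay well below `tau` while the two structural ones are far above it, the shrinkage collapses the estimate back to rank 2.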


Subjects
Image Processing, Computer-Assisted; Tomography, Optical Coherence; Algorithms; Artifacts; Retina/diagnostic imaging
4.
Comput Biol Chem; 86: 107269, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32413830

ABSTRACT

Protein kinases are enzymes that regulate the biological activity of proteins by transferring phosphate groups from ATP to specific amino acids. For that reason, inhibiting protein kinases with an active small molecule plays a significant role in cancer treatment. To this end, computational drug design, and the QSAR model in particular, is one of the most economical approaches for saving time and costs. In this study, active inhibitors were distinguished from inactive ones using a hybrid QSAR model, in which a genetic algorithm and the k-nearest-neighbor method served as the dimensionality reduction and classification components, respectively. Finally, to evaluate the proposed model's performance, it was compared against a support vector machine and a naïve Bayes classifier. The proposed model demonstrated significant superiority over the other QSAR models.
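The k-nearest-neighbor classification step for the active/inactive decision can be sketched in a few lines of numpy. The descriptor clusters below are synthetic, and the genetic-algorithm feature-selection stage is omitted:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal k-nearest-neighbor classifier: label each test compound by
    a majority vote among its k closest training compounds in descriptor
    space (Euclidean distance)."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dist)[:k]]
        # Majority vote among the k closest training compounds
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Two well-separated descriptor clusters: "active" near +2, "inactive" near -2
rng = np.random.default_rng(7)
X_act = rng.normal(2.0, 0.3, size=(20, 4))
X_inact = rng.normal(-2.0, 0.3, size=(20, 4))
X_train = np.vstack([X_act, X_inact])
y_train = np.array([1] * 20 + [0] * 20)
X_test = np.array([[2.1, 1.9, 2.0, 2.2], [-1.8, -2.2, -2.0, -1.9]])
print(knn_predict(X_train, y_train, X_test))  # expect [1 0]
```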


Subjects
Antineoplastic Agents/classification; Protein Kinase Inhibitors/classification; Algorithms; Antineoplastic Agents/chemistry; Bayes Theorem; Protein Kinase Inhibitors/chemistry; Quantitative Structure-Activity Relationship
5.
Article in English | MEDLINE | ID: mdl-32275595

ABSTRACT

In this paper, a new statistical model is proposed for the single-image super-resolution of retinal optical coherence tomography (OCT) images. OCT imaging relies on interferometry, which explains why OCT images suffer from a high level of noise. Moreover, data subsampling is carried out during the acquisition of OCT A-scans and B-scans, so effective super-resolution algorithms are needed to reconstruct high-resolution, clean OCT images. Here, a nonlocal sparse model-based Bayesian framework is proposed for OCT restoration. Nonlocal patches with similar structures are gathered into groups, and the sparse coefficients of each group of OCT images are modeled by scale mixture models: the coefficient vector is decomposed into the point-wise product of a random vector and a positive scaling variable. Estimation of the sparse coefficients depends on the distributions chosen for the random vector and the scaling variable; a Laplacian random vector with a Generalized Extreme-Value (GEV) scale parameter (the Laplacian+GEV model) shows the best goodness of fit for each group of OCT images. Finally, a new OCT super-resolution method based on this scale mixture model is introduced, in which the maximum a posteriori estimates of both the sparse coefficients and the scaling variables are computed efficiently with an alternating minimization method. Experimental results show that the proposed OCT super-resolution method based on the Laplacian+GEV model outperforms competing methods in terms of both subjective and objective visual quality.
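Under a Laplacian prior with Gaussian noise, the MAP update for a sparse coefficient reduces to the classic soft-thresholding rule. A sketch of that single step only; the GEV scaling-variable update and the alternating loop are omitted:

```python
import numpy as np

def soft_threshold(v, lam):
    """MAP estimate of a coefficient with Gaussian likelihood and a
    Laplacian prior: argmin_x 0.5*(x - v)**2 + lam*|x|. This is the
    coefficient step inside alternating-minimization schemes like the
    one described above."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([-3.0, -0.2, 0.0, 0.5, 2.0])
out = soft_threshold(v, 0.5)
print(out)  # small coefficients are zeroed, large ones shrunk toward 0
```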

6.
J Med Signals Sens; 9(1): 1-14, 2019.
Article in English | MEDLINE | ID: mdl-30967985

ABSTRACT

BACKGROUND: Macular disorders such as diabetic macular edema (DME) and age-related macular degeneration (AMD) are among the major ocular diseases. These diseases can lead to vision impairment or even permanent blindness within a fairly short time span, so their early diagnosis is a main goal for researchers in the field. METHODS: This study presents a comparative analysis of recent convolutional mixture of experts (CMoE) models for distinguishing normal macular OCT from DME and AMD. Three recent CMoE models were considered: the mixture ensemble of convolutional neural networks (ME-CNN), the multi-scale convolutional mixture of experts (MCME), and the wavelet-based convolutional mixture of experts (WCME). The models were evaluated on a database of three different macular OCT sets. The first two sets were acquired with Heidelberg imaging systems and comprised 148 and 45 subjects, respectively, while the third set consisted of 384 Bioptigen OCT acquisitions. To provide better insight into the performance of the CMoE ensembles, the models were extensively analyzed using 5-fold cross-validation and various classification measures, such as precision and average area under the ROC curve (AUC). RESULTS: Experimental evaluations showed that MCME and WCME outperformed the ME-CNN model, with overall precisions of 98.14% and 96.06%, respectively, for aligned OCTs. For non-aligned retinal OCTs, these values were 93.95% and 95.56%. CONCLUSION: Although the MCME model outperformed the other CMoE models on aligned retinal OCTs, WCME offers a robust model for diagnosing non-aligned retinal OCTs. This enables a fast and robust computer-aided system for macular OCT imaging that does not rely on routine computerized processes such as denoising, retinal layer segmentation, or retinal layer alignment.
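The evaluation protocol (stratified 5-fold cross-validation with per-fold precision) can be sketched with scikit-learn, assuming it is available. The synthetic three-class features and the logistic-regression stand-in below are illustrative; they are not the CMoE models themselves:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression

# Synthetic 3-class stand-in for OCT feature vectors (normal / DME / AMD)
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Stratified folds keep the class balance of each diagnosis per fold
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="precision_macro")
print("per-fold macro precision:", np.round(scores, 3))
print("mean:", round(scores.mean(), 3))
```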

8.
IEEE Trans Med Imaging; 37(4): 1024-1034, 2018 04.
Article in English | MEDLINE | ID: mdl-29610079

ABSTRACT

Computer-aided diagnosis (CAD) of retinal pathologies is an active area of medical image analysis. Given the increasing use of retinal optical coherence tomography (OCT) imaging, a CAD system for retinal OCT is essential to assist ophthalmologists in the early detection of ocular diseases and in treatment monitoring. This paper presents a novel CAD system based on a multi-scale convolutional mixture of experts (MCME) ensemble model to identify the normal retina and two common types of macular pathology, namely dry age-related macular degeneration and diabetic macular edema. The proposed MCME modular model is a data-driven neural structure that employs a new cost function for discriminative and fast learning of image features by applying convolutional neural networks to multi-scale sub-images. MCME maximizes the likelihood of the training data and ground truth under a mixture model that also captures the joint interaction between individual experts, using a correlated multivariate component for each expert module instead of modeling only the marginal distributions with independent Gaussian components. Two macular OCT data sets from Heidelberg devices were used to evaluate the method: a local data set of OCT images from 148 subjects and a public data set of 45 OCT acquisitions. For comparison, a wide range of classification measures was computed for the best configurations of the MCME method. With an MCME model of four scale-dependent experts, an average precision of 98.86% and an average area under the receiver operating characteristic curve (AUC) of 0.9985 were obtained.
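The mixture-of-experts combination rule — per-sample gating weights over expert class posteriors — can be sketched generically. The random posteriors below stand in for the convolutional experts, and the gating here is illustrative rather than the paper's learned, correlated formulation:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(3)
n_samples, n_experts, n_classes = 4, 3, 3

# Each expert emits a class posterior per sample (rows sum to 1)
expert_posteriors = softmax(rng.normal(size=(n_samples, n_experts, n_classes)))

# The gate assigns per-sample weights over experts (rows sum to 1)
gate = softmax(rng.normal(size=(n_samples, n_experts)))

# Mixture output: gate-weighted sum of expert posteriors
mixture = (gate[:, :, None] * expert_posteriors).sum(axis=1)
print(mixture.sum(axis=1))  # each row sums to 1: still a valid posterior
```

Because both the gate and the experts produce proper distributions, the convex combination is automatically a proper class posterior.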


Subjects
Image Interpretation, Computer-Assisted/methods; Macula Lutea/diagnostic imaging; Neural Networks, Computer; Tomography, Optical Coherence/methods; Algorithms; Databases, Factual; Humans; Macular Degeneration/diagnostic imaging
9.
J Biomed Opt; 23(3): 1-10, 2018 03.
Article in English | MEDLINE | ID: mdl-29564864

ABSTRACT

This study proposes a fully automatic algorithm for classifying three-dimensional (3-D) optical coherence tomography (OCT) scans, distinguishing patients with macular abnormalities from normal candidates. The proposed method requires no denoising, segmentation, or retinal alignment processes to assess the intraretinal layers or abnormal and lesion structures. To separate abnormal cases from the control group, a two-stage scheme was used, consisting of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of the 3-D volumes were extracted. In the second stage, the presence of abnormalities in the 3-D OCTs was scored from the extracted features. Two retinal SD-OCT datasets were used to evaluate the algorithm with an unbiased fivefold cross-validation (CV) approach. The first set comprises 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available set consists of 45 subjects evenly distributed across age-related macular degeneration, DME, and normal classes, from a Heidelberg device. Applying the algorithm to the full OCT volumes with 10 repetitions of fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task.
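The spatial-frequency front end can be illustrated with one level of a 2-D Haar decomposition, which splits an image into half-resolution approximation and detail sub-bands. This is a generic sketch of the idea, not the paper's exact wavelet transform:

```python
import numpy as np

def haar2d_level1(img):
    """One level of a 2-D Haar wavelet decomposition: returns the
    approximation (LL) and detail (LH, HL, HH) sub-bands that a
    wavelet-based CNN could consume as multi-band input.
    img must have even dimensions."""
    a, b = img[0::2, :], img[1::2, :]
    lo_r, hi_r = (a + b) / 2, (a - b) / 2            # filter along rows
    a2, b2 = lo_r[:, 0::2], lo_r[:, 1::2]
    ll, lh = (a2 + b2) / 2, (a2 - b2) / 2            # then along columns
    a3, b3 = hi_r[:, 0::2], hi_r[:, 1::2]
    hl, hh = (a3 + b3) / 2, (a3 - b3) / 2
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar2d_level1(img)
print(ll.shape)  # (2, 2): each sub-band is half-resolution
```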


Subjects
Image Interpretation, Computer-Assisted/methods; Macula Lutea/diagnostic imaging; Macular Edema/diagnostic imaging; Tomography, Optical Coherence/methods; Algorithms; Humans; Neural Networks, Computer; Wavelet Analysis
10.
Article in English | MEDLINE | ID: mdl-26671813

ABSTRACT

In this study, a gene selection technique is proposed for selecting a robust gene signature from microarray data to predict breast cancer recurrence. A hybrid scoring criterion was designed as a linear combination of scores determined in the mutual information (MI) domain and in a protein-protein interaction (PPI) network: the MI-based score represents the complementary information between the selected genes for outcome prediction, while the number of connections between the selected genes in the PPI network forms the PPI-based score. All genes were scored with the proposed function in a hybrid forward-backward gene-set selection process to select the optimal biomarker set from the gene expression microarray data. The accuracy and stability of the finally selected biomarkers were evaluated using five-fold cross-validation (CV) to classify available data on breast cancer patients into cohorts of poor and good prognosis. The results showed an appealing improvement in cross-dataset accuracy compared with similar studies whenever a primary signature selected from one dataset was applied to predict survival in other, independent datasets. Moreover, the proposed method demonstrated 58-92 percent overlap between the 50-gene signatures selected individually from seven independent datasets.
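The MI-based relevance score can be sketched with a simple histogram estimate of mutual information. The synthetic "genes" below are illustrative, and the PPI connectivity term and forward-backward search are omitted:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats: the kind of relevance score
    that the hybrid criterion above combines with PPI connectivity."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(5)
outcome = rng.normal(size=2000)
informative = outcome + 0.3 * rng.normal(size=2000)   # correlated "gene"
irrelevant = rng.normal(size=2000)                    # independent "gene"
print(mutual_information(informative, outcome) >
      mutual_information(irrelevant, outcome))        # the relevant gene scores higher
```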


Subjects
Biomarkers, Tumor/genetics; Breast Neoplasms/genetics; Breast Neoplasms/mortality; Gene Expression Profiling/methods; Neoplasm Proteins/genetics; Neoplasm Recurrence, Local/genetics; Neoplasm Recurrence, Local/mortality; Breast Neoplasms/diagnosis; Female; Genes, Neoplasm/genetics; Genetic Predisposition to Disease/epidemiology; Genetic Predisposition to Disease/genetics; Humans; Neoplasm Recurrence, Local/diagnosis; Prevalence; Prognosis; Reproducibility of Results; Risk Assessment/methods; Sensitivity and Specificity; Survival Analysis
11.
J Educ Health Promot; 4: 21, 2015.
Article in English | MEDLINE | ID: mdl-25861666

ABSTRACT

BACKGROUND: At universities under Iran's Ministry of Health and Medical Education, PhD admission comprises a written exam and an interview. The present work examines how the interview performance of candidates in the Iranian national medical PhD admission of the academic year 1386-87 related to their teaching experience. METHODS AND MATERIALS: Applicants' exam results for the year 86-87 were extracted from their score workbooks. Applicants were categorized as public (ordinary) candidates or employed lecturers. A total of 556 candidates from 29 different fields of study were invited for interview. Because the written subjects differed across fields of study, each group's score distribution was first normalized to one and the groups were then combined for the final analysis. RESULTS: Among public applicants, 45.1% were accepted and 54.9% rejected, whereas among lecturer applicants 66% were accepted and 34% rejected. The scores of all 29 groups were combined after normalization. The overall performance (written test plus interview) was 1.02 ± 0.12 for public applicants and 0.95 ± 0.1 for lecturers. The mean and standard deviation of the written test were 1.04 ± 0.16 for public applicants and 0.91 ± 0.12 for lecturers, while the interview scores were 0.98 ± 0.18 and 1.04 ± 0.17, respectively. CONCLUSION: The results show that lecturers performed better than public applicants in the interview, and the acceptance rate was skewed toward lecturers because of their higher interview scores. If the written test is taken as a reliable measure of applicant ability, removing this interview advantage would bring the acceptance rates closer to balance.
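One plausible reading of the per-field normalization described above, scaling each group's scores so the group mean equals one before pooling, can be sketched as follows; the raw scores are invented for illustration:

```python
import numpy as np

def normalize_group(scores):
    """Scale a group's scores so the group mean is 1, making fields of
    study with different written-subject mixes comparable before pooling.
    (One plausible reading of the normalization described above.)"""
    scores = np.asarray(scores, dtype=float)
    return scores / scores.mean()

field_a = normalize_group([12, 15, 18])   # hard exam, low raw scores
field_b = normalize_group([70, 80, 90])   # easy exam, high raw scores
print(field_a.mean(), field_b.mean())     # both group means are 1 after scaling
```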

12.
J Med Signals Sens; 3(1): 22-30, 2013 Jan.
Article in English | MEDLINE | ID: mdl-24083134

ABSTRACT

In this study, we considered several competitive learning methods, including hard competitive learning and soft competitive learning with and without fixed network dimensionality, for reliability analysis in microarrays. To take a broader view, and keeping in mind that competitive learning methods aim at error minimization or entropy maximization (different kinds of function optimization), we also investigated the abilities of mixture decomposition schemes. This study therefore covers algorithms based on function optimization, with particular emphasis on different competitive learning methods. The goal is to find the most powerful method according to a pre-specified criterion determined with numerical methods and matrix similarity measures. Furthermore, an indication of a dataset's intrinsic tendency to form clusters should be obtained before any clustering algorithm is applied; for this purpose, we proposed the Hopkins statistic as a measure of the clustering tendency of the data. The results show the remarkable ability of the Rayleigh mixture model, in comparison with the other methods, in the reliability analysis task.
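The Hopkins statistic proposed above for assessing clustering tendency can be sketched directly; the function and variable names are invented for the example:

```python
import numpy as np

def hopkins(X, m=None, rng=None):
    """Hopkins statistic: compare nearest-neighbor distances from m uniform
    random probe points (u) against those from m real data points (w).
    Values near 1 indicate strong cluster tendency; near 0.5, none."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    m = m or max(1, n // 10)
    lo, hi = X.min(axis=0), X.max(axis=0)
    probes = rng.uniform(lo, hi, size=(m, d))
    sample_idx = rng.choice(n, size=m, replace=False)

    def nn_dist(p, exclude=None):
        dists = np.linalg.norm(X - p, axis=1)
        if exclude is not None:
            dists[exclude] = np.inf   # don't count the point itself
        return dists.min()

    u = sum(nn_dist(p) for p in probes)
    w = sum(nn_dist(X[i], exclude=i) for i in sample_idx)
    return u / (u + w)

rng = np.random.default_rng(11)
clustered = np.vstack([rng.normal(0, 0.1, (100, 2)),
                       rng.normal(5, 0.1, (100, 2))])
uniform = rng.uniform(0, 5, size=(200, 2))
print(round(hopkins(clustered, rng=0), 2), round(hopkins(uniform, rng=0), 2))
```

Two tight clusters score near 1, while uniformly scattered points score near 0.5, which is exactly the pre-check the paragraph argues for.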

13.
Adv Biomed Res; 2: 26, 2013.
Article in English | MEDLINE | ID: mdl-23977654

ABSTRACT

BACKGROUND: To date, various methods have been used for dimensionality reduction, classification, clustering, and prediction of cancers based on gene expression profiling. The aim of this study is to extract the most significant genes and to classify diffuse large B-cell lymphoma (DLBCL) patients on the basis of their gene expression profiles. MATERIALS AND METHODS: We studied 40 DLBCL patients and 4026 genes, using an artificial neural network (ANN) to classify patients into two groups: germinal-center-like and activated-like. Faced with a small number of patients (40) and numerous genes (4026), we aimed to deploy one optimal network and achieve a minimum error, using the signal-to-noise (S/N) ratio as the main tool for dimensionality reduction. By selecting suitable training data, we trained just one network instead of 26. Finally, we extracted the two most significant genes. RESULTS: The two most significant genes were selected based on their S/N ratios. After selection of suitable training samples, the training and testing errors were 0 and 7%, respectively. CONCLUSION: We have shown that using the two most significant genes, ranked by S/N ratio, together with suitable training samples can classify DLBCL patients with rather good results. With these methods we could compensate for the small number of patients, improve classification accuracy, and reduce computational complexity and running time.
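The S/N-ratio gene ranking can be sketched as the Golub-style statistic; the expression matrix below is synthetic, not the DLBCL data:

```python
import numpy as np

def signal_to_noise(expr, labels):
    """Golub-style signal-to-noise ratio per gene:
    (mean_class1 - mean_class2) / (std_class1 + std_class2).
    Genes are then ranked by |S/N|, as in the study above."""
    g1, g2 = expr[labels == 0], expr[labels == 1]
    return (g1.mean(axis=0) - g2.mean(axis=0)) / (g1.std(axis=0) + g2.std(axis=0))

rng = np.random.default_rng(8)
labels = np.array([0] * 20 + [1] * 20)
expr = rng.normal(size=(40, 100))          # 40 patients x 100 genes
expr[labels == 1, 0] += 3.0                # only gene 0 separates the classes
ranking = np.argsort(-np.abs(signal_to_noise(expr, labels)))
print(ranking[0])  # the class-separating gene comes out on top
```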

14.
J Med Signals Sens; 1(2): 99-106, 2011 May.
Article in English | MEDLINE | ID: mdl-22606664

ABSTRACT

Because the T-wave section of the electrocardiogram (ECG) reflects the repolarization phase of cardiac activity, the information contained in this section is significant enough to characterize the proper operation of the heart's electrical activity. Long QT syndrome (LQT) and T-wave alternans (TWA) have subtle effects on the timing and amplitude of the T-wave interval, so the T-wave shapes in these diseases are similar to those of normal beats. Consequently, several T-wave features can be used to distinguish LQT and TWA from normal ECGs. In total, 22 features, comprising 17 morphological and 5 wavelet features, were extracted from the T-wave to show this section's ability to separate normal and abnormal records. The recognition pipeline consists of pre-processing, T-wave feature extraction, and an artificial neural network (ANN) classifier using a multi-layer perceptron (MLP). ECG signals from 142 patients (40 normal, 47 LQT, and 55 TWA) drawn from the MIT-BIH database were processed and classified. The specificities for the normal, LQT, and TWA classes were 99.89%, 99.90%, and 99.43%, respectively. T-wave features are thus among the most important descriptors for classifying ECGs into LQT, TWA, and normal categories, and the morphological features of the T-wave contribute more to classification performance than the wavelet features.
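A few simple morphological T-wave descriptors can be sketched on a synthetic segment. These are illustrative stand-ins, not the paper's exact 17 features, and the Gaussian-shaped wave below is invented for the example:

```python
import numpy as np

def t_wave_features(seg, fs=250.0):
    """Extract a few simple morphological descriptors from a T-wave
    segment sampled at fs Hz: peak amplitude, peak time, area, and
    width at half the peak amplitude."""
    peak_idx = int(np.argmax(seg))
    peak_amp = float(seg[peak_idx])
    area = float(seg.sum() / fs)                 # rectangle-rule integral
    above_half = np.flatnonzero(seg >= peak_amp / 2)
    width = (above_half[-1] - above_half[0]) / fs
    return {"peak_amp": peak_amp, "peak_time": peak_idx / fs,
            "area": area, "half_width": float(width)}

# Synthetic Gaussian-shaped T-wave: 0.3 mV peak at 0.2 s, 250 Hz sampling
fs = 250.0
t = np.arange(0, 0.4, 1 / fs)
seg = 0.3 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
feats = t_wave_features(seg, fs)
print({k: round(v, 3) for k, v in feats.items()})
```

In a real pipeline these per-beat features, alongside wavelet coefficients, would form the input vector to the MLP classifier.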
